
Apply alpha scale factor in torch add/sub converters #2680

Open
john-rocky wants to merge 1 commit into apple:main from john-rocky:fix-add-sub-alpha

Conversation

@john-rocky
Contributor

Summary

  • `aten::add(self, other, alpha)` and `aten::sub(self, other, alpha)` compute `self ± alpha * other`, but the converters previously raised on positional `alpha != 1` and silently ignored the kwarg form from `torch.export`. As a result, `torch.sub(x, y, alpha=5)` produced `x - y` instead of `x - 5*y` (issue #2573: CoreML ignores alpha in torch.add/torch.sub and gives incorrect output).
  • Look up `alpha` from positional inputs (TorchScript) and kwinputs (`torch.export`), and apply `y = y * alpha` before the add/sub when `alpha != 1`. This mirrors the pattern already used by the `addmm` handler.
  • The `alpha == 1` fast path is unchanged.
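The scaling pattern described above can be sketched in plain Python. This is a minimal illustration of the `aten::add`/`aten::sub` semantics, not the actual coremltools converter code; plain lists stand in for tensors:

```python
def add_with_alpha(x, y, alpha=1):
    # aten::add semantics: x + alpha * y.
    # The alpha == 1 fast path skips the extra multiply,
    # mirroring the converter change described above.
    if alpha != 1:
        y = [alpha * v for v in y]  # scale `other` before the add
    return [a + b for a, b in zip(x, y)]

def sub_with_alpha(x, y, alpha=1):
    # aten::sub semantics: x - alpha * y.
    if alpha != 1:
        y = [alpha * v for v in y]  # scale `other` before the sub
    return [a - b for a, b in zip(x, y)]
```

Scaling `y` once up front keeps a single add/sub op in both code paths, which is why the fix leaves the `alpha == 1` case untouched.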

Test plan

  • `pytest test_torch_ops.py::TestAddSubAlpha` — 24 cases covering int/float `alpha` across the TorchScript, torch.export, and ExecuTorch frontends.
  • `pytest test_torch_ops.py::TestBoolOps test_torch_ops.py::TestAddmm` — no regressions (104 cases).
  • End-to-end `mlpackage.predict` on the issue repro: `torch.sub(x, y, alpha=5)` now returns -5; it was 3 before.
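The repro numbers above can be checked with scalar arithmetic. The inputs `x = 5.0` and `y = 2.0` are assumed values, chosen only because they are consistent with the reported outputs (`x - y == 3` and `x - 5*y == -5`); the actual issue repro may use different tensors:

```python
# Scalar illustration of the reported behavior before and after the fix.
# x and y are assumed values consistent with the reported outputs.
x, y, alpha = 5.0, 2.0, 5

before_fix = x - y          # alpha silently dropped -> 3.0
after_fix = x - alpha * y   # correct aten::sub semantics -> -5.0
```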

Fixes #2573.

@TobyRoseman
Collaborator

This pull request looks good. Once we get a clean CI run, I will merge it. However, we recently updated our CI.

@john-rocky - In order to get a CI run, I need you to rebase this pull request on top of latest main. Please also rebase all your other open pull requests.

`aten::add(self, other, alpha)` and `aten::sub(self, other, alpha)`
compute `self ± alpha * other`, but the converters previously raised
on positional `alpha != 1` and silently ignored kwarg `alpha` from the
`torch.export` path. As a result, `torch.sub(x, y, alpha=5)` produced
`x - y` instead of `x - 5*y` (issue apple#2573).

Look up `alpha` from positional inputs (TorchScript) and kwinputs
(torch.export), and apply `y = y * alpha` before the add/sub when
`alpha != 1`, mirroring the existing `addmm` handler. The alpha=1 fast
path is unchanged.

Fixes apple#2573.
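The two-frontend lookup described in the commit message can be sketched as a small helper. This is a hypothetical illustration, not the converter's actual code; the names `inputs` and `kwinputs` follow the commit message's terminology:

```python
def lookup_alpha(inputs, kwinputs):
    # Hypothetical helper showing the lookup order described above:
    # TorchScript passes alpha as a third positional input,
    # torch.export passes it via kwargs, and it defaults to 1.
    if len(inputs) > 2 and inputs[2] is not None:
        return inputs[2]
    return kwinputs.get("alpha", 1)
```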
@john-rocky john-rocky force-pushed the fix-add-sub-alpha branch from 35cbab6 to 765aee6 on May 6, 2026 at 00:21
@john-rocky
Contributor Author

Rebased onto latest main. Also rebased the other 6 open PRs (#2681, #2686, #2688) per your request.
